37 research outputs found

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors include, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
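Point (iii) can be made concrete on the most popular instance, ℓ¹ regularization, where the backward (proximal) step of the forward-backward scheme is soft-thresholding. The sketch below is illustrative only: the sensing matrix, sparsity level, and regularization parameter are invented, not taken from the chapter.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward_l1(A, y, lam, n_iter=500):
    # Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by forward-backward splitting.
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                          # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # random sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]             # 3-sparse ground truth
y = A @ x_true
x_hat = forward_backward_l1(A, y, lam=0.02)
```

With noiseless data and a small regularization parameter, the iterate lands on the correct three-dimensional model subspace, which is an instance of the manifold-identification phenomenon the chapter analyzes.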

    Search for the Lepton Flavor Violation Processes J/ψ → μτ and eτ

    The lepton flavor violation processes J/ψ → μτ and J/ψ → eτ are searched for using a sample of 5.8 × 10⁷ J/ψ events collected with the BESII detector. Zero and one candidate events, consistent with the estimated background, are observed in J/ψ → μτ, τ → e ν̄_e ν_τ and J/ψ → eτ, τ → μ ν̄_μ ν_τ decays, respectively. Upper limits on the branching ratios are determined to be Br(J/ψ → μτ) < 2.0 × 10⁻⁶ and Br(J/ψ → eτ) < 8.3 × 10⁻⁶ at the 90% confidence level (C.L.). Comment: 9 pages, 2 figures.
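For context on how such limits arise: in a counting experiment with zero observed events and negligible background, the classical Poisson upper limit on the expected signal yield at 90% C.L. is −ln(1 − 0.90) ≈ 2.30 events, and dividing by the sample size times the total efficiency gives a branching-ratio limit. The sketch below uses the sample size quoted in the abstract, but the efficiency value is purely hypothetical; the paper's actual limit setting accounts for backgrounds and detector effects.

```python
import math

def poisson_upper_limit(n_obs, cl=0.90):
    # Classical Poisson upper limit on the signal mean for n_obs observed
    # events and no background; for n_obs = 0 this reduces to -ln(1 - cl).
    # Solve P(N <= n_obs | mu) = 1 - cl for mu by bisection.
    lo, hi = 0.0, 50.0
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        p = sum(math.exp(-mu) * mu**k / math.factorial(k)
                for k in range(n_obs + 1))
        if p > 1 - cl:
            lo = mu          # mu too small: tail probability still too large
        else:
            hi = mu
    return 0.5 * (lo + hi)

n_jpsi = 5.8e7      # J/psi sample size from the abstract
eff = 0.10          # HYPOTHETICAL overall efficiency, for illustration only
n_up = poisson_upper_limit(0)           # ~2.30 signal events at 90% C.L.
br_limit = n_up / (eff * n_jpsi)        # branching-ratio upper limit
```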

    Search for Doubly-Charged Higgs Boson Production at HERA

    A search for the single production of doubly-charged Higgs bosons H±± in ep collisions is presented. The signal is searched for via the Higgs decays into a high-mass pair of same-charge leptons, one of them being an electron. The analysis uses up to 118 pb⁻¹ of ep data collected by the H1 experiment at HERA. No evidence for doubly-charged Higgs production is observed, and mass-dependent upper limits are derived on the Yukawa couplings h_el of the Higgs boson to an electron-lepton pair. Assuming that the doubly-charged Higgs decays only into an electron and a muon via a coupling of electromagnetic strength h_eμ = √(4πα_em) = 0.3, a lower limit of 141 GeV on the H±± mass is obtained at the 95% confidence level. For a doubly-charged Higgs decaying only into an electron and a tau with a coupling h_eτ = 0.3, masses below 112 GeV are ruled out. Comment: 15 pages, 3 figures, 1 table.

    Sparsity and Compressed Sensing in Inverse Problems

    This chapter is concerned with two important topics in the context of sparse recovery in inverse and ill-posed problems. In the first part we elaborate conditions for exact recovery. In particular, we describe how both ℓ¹-minimization and matching pursuit methods can be used to regularize ill-posed problems and, moreover, state conditions which guarantee exact recovery of the support in the sparse case. The focus of the second part is on the incomplete data scenario. We discuss extensions of compressed sensing for specific infinite-dimensional ill-posed measurement regimes. We are able to establish recovery error estimates by adequately relating the isometry constant of the sensing operator, the ill-posedness of the underlying model operator, and the regularization parameter. Finally, we very briefly sketch how projected steepest descent iterations can be applied to retrieve the sparse solution.
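Of the two recovery methods named in the first part, matching pursuit is the easier to sketch: greedily pick the dictionary atom most correlated with the current residual, then (in the orthogonal variant below) re-fit all selected atoms by least squares. The dimensions and sparsity level are illustrative choices, not from the chapter.

```python
import numpy as np

def omp(A, y, n_nonzero):
    # Orthogonal matching pursuit: greedy atom selection + least-squares re-fit.
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))   # atom best matching residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
A /= np.linalg.norm(A, axis=0)        # unit-norm columns (atoms)
x_true = np.zeros(80)
x_true[[3, 20, 55]] = [2.0, -1.0, 1.5]
y = A @ x_true
x_hat = omp(A, y, n_nonzero=3)
```

Exact support recovery by such greedy schemes is precisely what the recovery conditions of the first part guarantee, under incoherence-type assumptions on the dictionary.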

    Wavelets and Time-Frequency Methods in Linear Systems and Neural Networks

    In the first part of this dissertation we consider the problem of rational approximation and identification of stable linear systems. Affine wavelet decompositions of the Hardy space H²(Π⁺) are developed as a means of constructing rational approximations to nonrational transfer functions. The decompositions considered here are based on frames constructed from dilations and complex translations of a single rational function. It is shown that suitable truncations of such decompositions can lead to low-order rational approximants for certain classes of time-frequency localized systems. It is also shown that suitably truncated rational wavelet series may be used as 'linear-in-parameters' black box models for system identification. In the context of parametric models for system identification, the time-frequency localization afforded by affine wavelets is used to incorporate a priori knowledge into the formal properties of the model. Comparisons are made with methods based on the classical Laguerre filters. The second part of this dissertation is concerned with developing a theoretical framework for feedforward neural networks which is suitable for both analysis and synthesis of such networks. Our approach to this problem is via affine wavelets and the theory of frames. Affine frames for L² are constructed using combinations of sigmoidal functions and the inherent translations and dilations of feedforward network architectures. Time-frequency localization is used in developing methods for the synthesis of feedforward networks to solve a given problem. These two seemingly disparate problems both lie within the realm of approximation theory, and our approach to both is via the theory of frames and affine wavelets.

    Frames Generated by Subspace Addition

    Given two subspaces M and N of a Hilbert space, and frames associated with each of the subspaces, the question addressed in this report is that of determining when the union of the two frames is a frame for the direct sum space M + N. We provide sufficient conditions for the union of the two frames to be a frame for M + N, and also estimates for the frame bounds. The results discussed here are given in terms of the relative geometry of the subspaces. Some simple examples in which the frame bounds can be explicitly computed are provided to demonstrate the accuracy of the frame bound estimates.
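In finite dimensions the frame bounds in question can be computed directly: they are the smallest and largest nonzero eigenvalues of the frame operator S = F Fᵀ, where the frame vectors are the columns of F. A minimal sketch (the subspaces and vectors are invented for illustration, not taken from the report):

```python
import numpy as np

def frame_bounds(F):
    # Frame bounds of the columns of F for their span: the extreme nonzero
    # eigenvalues of the frame operator S = F F^T.
    eigvals = np.linalg.eigvalsh(F @ F.T)
    nonzero = eigvals[eigvals > 1e-10]
    return float(nonzero.min()), float(nonzero.max())

# A frame for M = span{e1, e2} and a frame for N = span{e3} in R^3.
FM = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 1.0],
               [0.0, 0.0, 0.0]])
FN = np.array([[0.0],
               [0.0],
               [1.0]])
lower, upper = frame_bounds(np.hstack([FM, FN]))  # union as a frame for M + N
# Here M and N are orthogonal, and the union has bounds lower = 1, upper = 3.
```

When the subspaces are not orthogonal, the lower bound of the union degrades with the angle between M and N, which is the "relative geometry" dependence the report quantifies.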

    Neural Networks for Low Level Processing of Tactile Sensory Data

    As the field of robotics continues to advance, the need for artificial tactile sensing becomes increasingly evident. Real-time, local processing of tactile sensory data also becomes a crucial issue in most applications of tactile sensing. In this thesis it is shown that analog neural networks provide an elegant solution to some of the problems of low-level tactile data processing. We consider the particular problem of 'deblurring' strain data from an array of tactile sensors. It is shown that the inverse problem of deblurring strain measurements to recover the surface stress over a region of contact is ill-posed in the sense defined by Hadamard. This problem is further complicated by the corruption of sensor data by noise. We show that the techniques of 'regularization' may be used to introduce prior knowledge of the solution space into the solutions in order to transform the problem into one which is well-posed and less sensitive to noise. The particular regularizer chosen for the recovery of normal stress distributions is of the functional form of Shannon entropy. Formulating the inverse problem so as to regularize the solutions results in a variational principle which must be solved in order to recover the surface stress. An analog neural network which provides the desired solutions to the variational principle in the course of the natural time evolution of the circuit dynamics is proposed as a solution to the requirements for fast, local processing in tactile sensing. We discuss the performance of the network in the presence of noise based upon computer simulations. We also demonstrate, by means of a breadboard prototype of the network, the speed of computation achievable by such a network. An integrated circuit implementation of the proposed network has been completed, and the requirements of such implementations are discussed.
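To make the role of the entropy regularizer concrete, the sketch below deblurs a 1-D stress profile by minimizing ‖Ks − d‖² + λ Σ s log s with projected gradient descent. It is a numerical stand-in with an invented blur kernel, profile, and parameters, not the thesis's analog-network formulation; the entropy term requires (and here enforces) positivity of the recovered stress.

```python
import numpy as np

def entropy_deblur(K, d, lam=1e-2, step=0.1, n_iter=2000, eps=1e-8):
    # Minimize ||K s - d||^2 + lam * sum(s log s) over s > 0
    # by gradient descent with projection onto s >= eps.
    s = np.full(K.shape[1], max(float(d.mean()), eps))
    for _ in range(n_iter):
        grad = 2.0 * K.T @ (K @ s - d) + lam * (np.log(s) + 1.0)
        s = np.maximum(s - step * grad, eps)   # keep the stress positive
    return s

n = 32
x = np.arange(n)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)  # Gaussian blur
K /= K.sum(axis=1, keepdims=True)                          # normalized rows
s_true = np.zeros(n)
s_true[10:14] = 1.0                  # localized contact stress
d = K @ s_true                       # blurred "sensor" data
s_hat = entropy_deblur(K, d)
```

In the thesis the analogous variational principle is solved by the time evolution of an analog circuit rather than by an iterative digital scheme.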

    Analysis and Synthesis of Feedforward Neural Networks Using Discrete Affine Wavelet Transformations

    In this paper we develop a theoretical description of standard feedforward neural networks in terms of discrete affine wavelet transforms. This description aids in establishing a rigorous understanding of the behavior of feedforward neural networks based upon the properties of wavelet transforms. Time-frequency localization properties of wavelet transforms are shown to be crucial to our formulation. In addition to providing a solid mathematical foundation for feedforward neural networks, this theory may prove useful in explaining some of the empirically obtained results in the field of neural networks. Among the more practical implications of this work are the following: (1) Simple analysis of training data provides a complete topological definition for a feedforward neural network. (2) Faster and more efficient learning algorithms are obtained by reducing the dimension of the parameter space in which interconnection weights are searched for. This reduction of the weight space is obtained via the same analysis used to configure the network. Global convergence of the iterative training procedure discussed here is assured. Moreover, it is possible to arrive at a non-iterative training procedure which involves solving a system of linear equations. (3) Every feedforward neural network constructed using our wavelet formulation is equivalent to a 'standard feedforward network.' Hence properties of neural networks which have prompted the study of VLSI implementations of such networks are retained.
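The non-iterative training of point (2) can be illustrated as follows: if the hidden-layer dilations and translations are fixed on a wavelet-style grid, only the output weights remain, and they solve a linear least-squares problem. This is a hedged illustration of the idea with invented scales, shifts, and target function, not the paper's exact construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def features(x, scales=(1.0, 2.0, 4.0, 8.0), shifts=tuple(np.linspace(-1, 1, 9))):
    # Hidden units: sigmoids at fixed dilations (scales) and translations
    # (shifts), i.e. the network topology is set in advance by analysis.
    return np.stack([sigmoid(a * (x - b)) for a in scales for b in shifts], axis=1)

x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3 * np.pi * x)                    # target function to approximate

Phi = features(x)                            # 200 x 36 design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # non-iterative: one linear solve
y_hat = Phi @ w                              # network output on the grid
```

Because the problem is linear in the output weights, the "training" is a single least-squares solve with a global optimum, mirroring the guaranteed convergence claimed for the reduced parameterization.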

    Affine Frames of Rational Wavelets in H²(Π⁺)

    In this paper we investigate frame decompositions of H²(Π⁺) as a method of constructing rational approximations to nonrational transfer functions in H²(Π⁺). The frames of interest are generated from a single analyzing wavelet. We consider the case in which the analyzing wavelet is rational and show that, by appropriate grouping of terms in a wavelet expansion, H²(Π⁺) can be decomposed as an infinite sum of rational transfer functions which are related to one another by dilation and translation. Criteria for selecting a finite number of terms from such an infinite expansion are developed using the time-frequency localization properties of wavelets.